Table of Contents

Mathematical Researches (نشریه پژوهشهای ریاضی)
Volume 6, Issue 3 (Serial No. 14, Fall 1399)

  • Publication date: 1399/10/24
  • Number of articles: 15
  • Mohammad Eslamian* Pages 319-332

    In this paper, using the hybrid steepest-descent method and the viscosity algorithm, we present a new algorithm for solving the variational inequality problem. The sequence generated by this algorithm converges strongly to a common element of the set of common zeros of a family of inverse strongly monotone operators and the set of common fixed points of a family of demi-contractive operators. We also show that the sequence generated by this algorithm converges strongly to a solution of the variational inequality problem over the set of common fixed points of a finite family of quasi-nonexpansive and strictly pseudo-contractive operators in a Hilbert space. Finally, we present applications of these results to the split common fixed point problem: finding a member of the set of common fixed points of a finite family of strictly pseudo-contractive mappings in a Hilbert space whose image under a bounded linear operator lies in the set of common fixed points of a family of nonexpansive mappings.

    Keywords: Variational inequality problem, demi-contractive mappings, fixed point, inverse strongly monotone operators
  • Muhyiddin Izadi*, Abdollah Jalilian Pages 333-346

    The standard kernel estimator suffers from boundary bias when estimating probability density functions of distributions supported on the positive real numbers. Gamma kernel estimators and orthogonal series estimators are two alternatives to the standard kernel estimator that are free of boundary bias. In this paper, a simulation study is conducted to compare the small-sample performance of the Gamma kernel estimators and of the orthogonal series estimators with Laguerre and Hermite bases. Based on this simulation study, we find that if the basis of the orthogonal series estimator is chosen using partial knowledge of the shape of the target density function, the orthogonal series estimator can outperform the Gamma kernel estimator.

    Keywords: Boundary bias, mean integrated squared error
  • Saeid Bagheri* Pages 347-362

    Let H be a quasi-Hopf algebra over a commutative ring k and let A be a comodule algebra over H. In this paper we show that although the category AMA of (A,A)-bimodules is not necessarily a monoidal category, the coaction induces an action of the monoidal category HMH on AMA; through this action we introduce suitable versions of tensor and Hom endofunctors of the category AMA and describe adjunctions between these endofunctors. We also compute the associated units and counits explicitly.

    Keywords: (Quasi-)Hopf algebra, comodule algebra, monoidal category, action of a monoidal category
  • Ali Bayati Eshkaftaki* Pages 363-372

    Doubly stochastic matrices play a fundamental role in the theory of majorization in finite dimensions. Birkhoff's theorem describes the relation between n×n doubly stochastic matrices and permutations. In infinite dimensions, these matrices generalize to doubly stochastic operators and permutations on the spaces lp(I). In this paper we first introduce double-null operators and investigate some of their important properties. Then, with the help of double-null operators, we study Birkhoff's theorem in infinite dimensions.

    Keywords: Doubly stochastic operator, double-null operator, Birkhoff's theorem, extreme points. Mathematics Subject Classification (2010): 15B48, 15B51, 47B37
  • Maryam Khademi*, Nima Sheikh Khani, Pooneh Khodabakhsh Pages 373-386

    Recent developments in online social networks, especially their applications in the modern world of information technology, have driven a remarkable expansion of graph theory and game theory and attracted the attention of many mathematicians, computer scientists and statistical analysts. A key property of social networks is that the growth of relationships among individuals can strongly influence their decisions. Hence, one practical topic in social networks is finding the most influential individuals, in order to maximize the impact of their activities in viral marketing, spreading harmful rumors, disseminating fake news, election engineering, and so on. In this paper, we first study diffusion among nodes using Shapley-value centrality, the partitioning of a network into smaller communities, and the cascade model from game theory. We then propose the CSCS algorithm for finding the most influential individuals in social network graphs and implement it on several datasets. Finally, the results of the proposed algorithm are compared with those of other existing algorithms.

    Keywords: Social network graphs, game theory, influence maximization, Shapley value, communities, CSCS algorithm
  • Abbas Khademi, Majid Soleimani-Damaneh* Pages 387-392

    In this paper, we investigate a necessary optimality condition for a particular problem in nonlinear optimization known as the sparsity constrained problem. This problem minimizes a continuously differentiable function subject to a sparsity constraint on the variable. We show that, in the general case, L-stationarity is a necessary optimality condition for the sparsity constrained problem; in the literature this property had been proved under the assumption that the gradient operator is Lipschitz.

    Keywords: Nonlinear optimization, sparsity constrained problems, optimality, L-stationarity
  • Mehdi Rashidi Kouchi* Pages 393-404

    In this paper, the notion of wavelet sets on locally compact abelian groups with a uniform lattice is defined. This definition generalizes wavelet sets in Euclidean space. These sets are then characterized using the Fourier transform and multiresolution analysis. Next, generalized scaling sets on locally compact abelian groups are defined and studied, and the relation between generalized scaling sets and wavelet sets is described. Finally, by defining the wavelet dimension function on locally compact abelian groups, generalized scaling sets are characterized and their relation to wavelet sets is examined.

    Keywords: Wavelet, locally compact abelian group, wavelet set, multiresolution analysis, generalized scaling sets
  • Maryam Sharafi*, Farideh Tavangar, Shohre Enami, Hossein Nadeb Pages 405-416

    The kappa distribution is a positively skewed distribution used for analyzing rainfall, wind speed and stream flow data. In this paper, we first review the three-parameter kappa distribution introduced by Park et al. [1], and then present four estimation methods for the parameters of this distribution: the method of moments, L-moments, maximum likelihood and maximum product of spacings. Using a simulation study, we compare their performance; finally, these methods are applied to the monthly total rainfall data of the Abali station in Tehran province.

    Keywords: Three-parameter kappa distribution, maximum likelihood estimator, L-moments estimator, maximum product of spacings estimator
  • Davood Farbod* Pages 417-428

    In this paper, we consider a generalized hypergeometric distribution constructed via a birth-death process for modeling bioinformatics data. Under certain conditions, we derive a system of likelihood equations whose solution coincides with the maximum likelihood estimators of the parameters of interest. An approximate method, together with a simulation study, is presented for parameter estimation. To illustrate applications of this distribution, three real bioinformatics datasets are fitted with it, and the results are compared, using statistical criteria, with four other discrete distributions; on this basis, the generalized hypergeometric distribution turns out to be a more suitable model than the four other discrete distributions.

    Keywords: Generalized hypergeometric distribution, birth-death process, bioinformatics, maximum likelihood estimation, Markov chain Monte Carlo (MCMC). Mathematics Subject Classification (2010): 62F10, 62F30, 62P10, 60J28
  • Hojjat Farzadfard* Pages 429-440

    If f is a continuous function between metric spaces, the modulus of continuity of f is the function that assigns to each point of the domain and each positive epsilon the largest positive delta satisfying the definition of continuity. In this paper, we show that linearity of the modulus of continuity of a function on the real numbers characterizes it as being a geodesic (a shortest path between two points of a metric space is called a geodesic).
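One direction of this characterization can be seen immediately, under the standard conventions (a sketch in our own notation, not the paper's):

```latex
% For a unit-speed geodesic f : \mathbb{R} \to M we have
% d(f(x), f(y)) = |x - y|, so the largest admissible \delta at x is
\delta(x,\varepsilon)
  = \sup\{\delta > 0 : d(f(x), f(y)) \le \varepsilon
          \ \text{whenever}\ |x - y| \le \delta\}
  = \varepsilon,
% which is linear in \varepsilon and independent of x.
```

The substance of the paper is the converse: linearity of the modulus of continuity forces the function to be a geodesic.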

    Keywords: Geodesic, modulus of continuity, path length
  • Rahim Kargar, Ali Ebadian, Nader Kanzi* Pages 441-448

    Let the class of all analytic and normalized functions on the unit disk be given. For each function of this family, logarithmic coefficients are defined; we also define a subclass of it by means of the subordination relation. Our aim in this paper is to obtain sharp estimates of inequalities involving the logarithmic coefficients of functions belonging to this class.
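The displayed formulas of this abstract are not reproduced on the journal page. For orientation only, the standard convention for logarithmic coefficients (an assumption on our part; the article's own definition may differ in normalization) reads:

```latex
% Assumed standard convention: for f analytic and univalent on the
% unit disk with f(0) = 0 and f'(0) = 1, the logarithmic
% coefficients \gamma_n are defined by
\log\frac{f(z)}{z} \;=\; 2\sum_{n=1}^{\infty}\gamma_n z^{n},
\qquad z \in \mathbb{D}.
```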

    Keywords: Univalent function, starlikeness, subordination, logarithmic coefficients, convolution
  • Reza Mokhtari*, Elham Feizollahi Pages 449-464

    In this paper, we extend semi-Lagrangian finite difference schemes to the system of two-dimensional Burgers equations. The proposed scheme is not restricted by the Courant-Friedrichs-Lewy (CFL) condition, so large time steps can be chosen. The proposed scheme parallelizes well; it is in essence a locally one-dimensional (LOD) scheme, derived via the modified equation approach, applied to solving the system of Burgers equations. A nice feature of the proposed method is that at each time iteration only two tridiagonal linear systems need to be solved, so the computational cost of the method is low.
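For reference, a standard form of the two-dimensional Burgers system treated by such schemes (the paper's exact scaling and notation may differ):

```latex
\begin{aligned}
u_t + u\,u_x + v\,u_y &= \nu\,(u_{xx} + u_{yy}),\\
v_t + u\,v_x + v\,v_y &= \nu\,(v_{xx} + v_{yy}).
\end{aligned}
```

Semi-Lagrangian schemes integrate the advective part along the characteristics dx/dt = u, dy/dt = v, which is what lifts the CFL restriction mentioned above.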

    Keywords: System of Burgers equations, semi-Lagrangian finite difference scheme, modified equation approach, locally one-dimensional scheme
  • Navideh Modarresi, Saeid Rezakhah*, Shirin Shoaee Pages 465-476

    Continuous-time autoregressive moving average (CARMA) models driven by a Lévy process have stationary increments, a strong restriction that forces stationarity of the process. In this paper, by extending the CARMA process to the case where the driving process is semi-Lévy, we create a setting in which the CARMA process is periodically correlated and therefore has much wider applicability. On this basis, the statistical properties of the semi-Lévy-driven CARMA process are investigated, and the established statistical properties are confirmed using simulated data in the discrete case.

    Keywords: Semi-Lévy measure, periodically correlated processes, continuous-time autoregressive moving average model
  • Ahmad Molabahrami* Pages 477-486

    In this paper, the modified degenerate kernel method is applied to multi-dimensional Fredholm integral equations of the second kind. The modification arises from applying the approximation used to separate the kernel to the source function as well. Lagrange interpolation is used to obtain the required approximations. A detailed error and convergence analysis of the method is presented. The efficiency of the method is demonstrated on several examples, and comparisons are made with some other methods.
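The underlying (unmodified) degenerate kernel idea can be summarized as follows; this is the standard textbook sketch, not the paper's modified variant:

```latex
% Fredholm equation of the second kind with a degenerate (separable)
% kernel approximation:
u(x) = f(x) + \lambda \int_{D} K(x,t)\,u(t)\,dt,
\qquad K(x,t) \approx \sum_{i=1}^{n} a_i(x)\,b_i(t),
% which turns the equation into
u(x) \approx f(x) + \lambda \sum_{i=1}^{n} c_i\,a_i(x),
\qquad c_i = \int_{D} b_i(t)\,u(t)\,dt,
% so the c_i solve an n x n linear system obtained by substituting
% the expansion of u back into their defining integrals.
```

The modification studied in the paper applies the same separating approximation to the source function f as well.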

    Keywords: Multi-dimensional Fredholm integral equations of the second kind, multi-dimensional Lagrange interpolation, degenerate kernel method, direct computation method, modified degenerate kernel method. Mathematics Subject Classification (2010): 45B05
  • Tayyebeh Vaezizadeh*, Tayyebeh Parsaei, Fereshteh Forouzesh Pages 487-500

    In the study of viral diseases in plants, the response of the plant immune system plays a fundamental role. In this paper, a mathematical model based on a system of delay differential equations is presented for the plant immune response. The dynamical behavior of the model around its equilibrium points is then investigated; finally, a plant is considered under two different conditions, organic and non-organic, and the behavior of the solution curves is examined using MATLAB.

    Keywords: Mathematical model, equilibrium point, stability, Hopf bifurcation. Mathematics Subject Classification (2010): 37C75, 37H20, 00A71
  • Mohammad Eslamian* Pages 319-332

    In this paper, by using the viscosity iterative method and the hybrid steepest-descent method, we present a new algorithm for solving the variational inequality problem. The sequence generated by this algorithm converges strongly to a common element of the set of common zero points of a finite family of inverse strongly monotone operators and the set of common fixed points of a finite family of demi-contractive mappings. We also prove that the sequence generated by this algorithm converges strongly to a solution of a system of variational inequalities over the set of common fixed points of quasi-nonexpansive mappings and strict pseudo-contractive mappings in a Hilbert space. Finally, some applications of these results are presented for solving the split common fixed point problem, which entails finding a point that belongs to the set of common fixed points of a finite family of strict pseudo-contractive mappings in a Hilbert space such that its image under a bounded linear transformation belongs to the set of common fixed points of a finite family of nonexpansive mappings in the image space.
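The strongly convergent iterations discussed above belong to the broad family of averaged fixed-point schemes. As a minimal, self-contained illustration (our own toy example using the classical Krasnoselskii-Mann iteration, not the algorithm proposed in the paper), consider a nonexpansive map on the plane whose unique fixed point is the origin:

```python
import numpy as np

def mann_iteration(T, x0, alpha=0.5, iters=200):
    """Krasnoselskii-Mann iteration x_{n+1} = (1-a) x_n + a T(x_n);
    for a nonexpansive T with a fixed point, the iterates converge
    to a fixed point of T."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = (1 - alpha) * x + alpha * T(x)
    return x

# nonexpansive example map: rotation by 90 degrees followed by the
# metric projection onto the closed unit ball; its only fixed point is 0
R = np.array([[0.0, -1.0], [1.0, 0.0]])
def T(x):
    y = R @ x
    return y / max(np.linalg.norm(y), 1.0)

x_star = mann_iteration(T, [3.0, 1.0])
```

Viscosity and hybrid steepest-descent methods, as in the paper, add a contraction or gradient correction term to such an averaged iteration in order to force strong (rather than weak) convergence in infinite-dimensional Hilbert spaces.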

    Keywords: Variational inequality problem, demi-contractive mappings, fixed point, strict pseudo-contractive mappings
  • Muhyiddin Izadi*, Abdollah Jalilian Pages 333-346
    Introduction

    Estimation of a probability density function is an important area of nonparametric statistical inference that has received much attention in recent decades. The kernel method is widely used in nonparametric estimation of the probability density function of an absolutely continuous distribution with support on the whole real line. However, for a distribution with support on a subset of the real line, the kernel density estimator with fixed symmetric kernels incurs bias at the boundaries of the support, which is known as the boundary bias issue. This is due to smoothing data near the boundary points by the fixed symmetric kernel, which allocates probability density outside of the distribution's support (see Silverman, 1986). There are many applications, such as reliability, insurance and life testing, that deal with non-negative data, where estimating the probability density function of distributions with support on the non-negative real line is the object of interest. Using the kernel estimator with fixed symmetric kernels in these cases results in the boundary bias issue at the origin. A number of methods have been proposed to avoid the boundary bias issue at the origin. A simple remedy is to replace symmetric kernels with asymmetric kernels, which never assign density to negative values. The Gamma kernels proposed by Chen (2000) are effective asymmetric kernels for estimating the probability density function of distributions on the non-negative real line. Orthogonal series estimators form another class of nonparametric probability density estimators, which goes back to Cencov (1964). In this approach, as reviewed in Efromovich (2010), the target probability density function is expanded in terms of a sequence of orthogonal basis functions. After selecting a suitable sequence of orthogonal basis functions, the observed data are used to estimate the coefficients of the expansion in order to obtain the orthogonal series density estimator.
    Similar to kernel estimators, under some mild conditions the orthogonal series estimators have appealing large-sample properties. Moreover, the boundary bias issue can be avoided by using orthogonal series estimators with suitable basis functions. Although the small-sample properties of asymmetric kernel estimators with the Gamma kernels and of orthogonal series estimators are well studied separately, to the best of our knowledge there have been no reports comparing their performance in estimating the probability density function of distributions on the non-negative real line. In this paper, a simulation study is conducted to compare the small-sample performance of the Gamma kernel estimators and orthogonal series estimators for a set of distributions on the positive real line.
     

    Material and methods

    Following Malec and Schienle (2014), we consider six parameter settings for the generalized F distribution to obtain probability density functions with different shapes, near-origin behaviors and tail decays (Figure 2). Based on 5000 simulations from each of these density functions with the considered sample sizes, we estimate the target density function using the type I and type II Gamma kernel estimators and the orthogonal series estimators with Hermite and Laguerre basis functions, and compute the mean integrated squared error (MISE). The bandwidth parameter of the Gamma kernel estimators and the cutoff and smoothing parameters of the orthogonal series estimators significantly affect the performance of the estimators. We use optimal bandwidths for the Gamma kernel estimators and optimal cutoff and smoothing parameters for the orthogonal series estimators to avoid variation due to uncertainty in the tuning parameters. To obtain the optimal tuning parameters for each target density, we compute and minimize the MISE with respect to the tuning parameters based on an additional 5000 simulations from the true density function.
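The type I Gamma kernel estimator referred to above is short to state in code. A minimal sketch (our own toy target density and a fixed, hand-picked bandwidth, not the paper's generalized F designs or optimal tuning):

```python
import numpy as np
from scipy import stats

def gamma_kernel_density(x_grid, data, b):
    """Type I Gamma kernel estimator (Chen, 2000): at each grid point x,
    average the Gamma(x/b + 1, scale=b) density over the observations.
    The kernel is supported on [0, inf), so no mass leaks below zero."""
    return np.array([stats.gamma.pdf(data, a=x / b + 1.0, scale=b).mean()
                     for x in np.atleast_1d(x_grid)])

rng = np.random.default_rng(0)
data = rng.gamma(shape=2.0, scale=1.0, size=400)  # positive-support sample
grid = np.linspace(0.0, 8.0, 81)
est = gamma_kernel_density(grid, data, b=0.2)

# integrated squared error against the true Gamma(2, 1) density
dx = grid[1] - grid[0]
ise = ((est - stats.gamma.pdf(grid, a=2.0)) ** 2).sum() * dx
```

Averaging such integrated squared errors over many simulated samples gives the MISE criterion used in the comparison above.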

    Results and discussion

    For each density function, the optimal tuning parameters and the MISEs of the estimators are reported (Table 2). As expected from the large-sample properties, increasing the sample size improves the performance of all estimators. The performance of the estimators varies from case to case, and no considered estimator is the best in all cases. In all cases except one, the type II Gamma kernel estimator is superior to the type I Gamma kernel estimator, in agreement with Chen's (2000) suggestion of preferring the type II to the type I Gamma kernel estimator. However, in one case the type I Gamma kernel estimator is better than all other estimators. In cases where the shape and near-origin behavior of the target density are similar to those of the Hermite or Laguerre basis functions, the corresponding orthogonal series estimator outperforms all the other competing estimators.

    Conclusion

    The following conclusions were drawn from this research to choose among the considered estimators. If the basis functions of the orthogonal series estimator are chosen to have shape and near-origin behavior similar to those of the target density function, then the corresponding orthogonal series estimator can outperform the Gamma kernel estimators. If there is no prior knowledge about the shape and near-origin behavior of the target density function and the sample size is relatively large (n=400), then the type II Gamma kernel estimator can outperform the orthogonal series estimators.

    Keywords: Boundary bias, Generalized F distribution, Mean integrated squared error, Smoothing parameter
  • Saeid Bagheri* Pages 347-362

    Introduction

    Over a commutative ring k, it is well known from classical module theory that the tensor endofunctor V⊗- is left adjoint to the Hom endofunctor Hom(V,-). The unit and counit of this adjunction are obtained trivially. For a k-bialgebra (H, μ, ι, Δ, ε), the category of (H,H)-bimodules is a monoidal category: the tensor product M⊗N of two arbitrary (H,H)-bimodules M and N is again an (H,H)-bimodule, in which the bimodule structure is defined diagonally using the comultiplication. The associativity constraint of this category is formally trivial, as in the category of k-modules, and follows from the coassociativity of the comultiplication. An antipode is an algebra anti-homomorphism S: H → H which is the inverse of the identity map with respect to the convolution product. A Hopf algebra is a bialgebra together with an antipode. As generalizations of the concepts of bialgebra and Hopf algebra, V. G. Drinfeld introduced the concepts of quasi-bialgebra and quasi-Hopf algebra, respectively. A quasi-bialgebra over a commutative ring k is an associative unital algebra H together with a comultiplication Δ: H → H⊗H and a counit ε: H → k satisfying all axioms of bialgebras except the coassociativity of Δ. However, the non-coassociativity of Δ is controlled by a normalized 3-cocycle Φ ∈ H⊗H⊗H in such a way that the category of (H,H)-bimodules is monoidal. In this case, the associativity constraint of the category is not the trivial one; it depends on the element Φ. We can nevertheless consider the tensor functors V⊗- and -⊗V as endofunctors of this category. A quasi-antipode (S, α, β) has been defined as a generalization of the antipode, and a quasi-Hopf algebra is a quasi-bialgebra together with a quasi-antipode. Let (H, μ, ι, Δ, ε, S, α, β) be a quasi-Hopf algebra with a bijective quasi-antipode S. It has been shown that the tensor endofunctors V⊗- and -⊗V of the bimodule category have right adjoints, which are described in terms of Hom-functors; this means that the bimodule category is a biclosed monoidal category.
    Over a Hopf algebra H, the category of left H-comodules is monoidal, and algebras and coalgebras can be defined inside this category. In this way, a left H-comodule algebra is defined as an algebra in the monoidal category of left H-comodules. However, if H is a quasi-bialgebra or even a quasi-Hopf algebra, the non-coassociativity of the comultiplication means that we cannot define an H-comodule algebra in this categorical language. To solve this problem, F. Hausser and F. Nill defined an H-comodule algebra in a formal way, as a generalization of the quasi-bialgebra H, and considered some categories related to an H-comodule algebra, such as the category of two-sided Hopf modules. In this article, the bimodule category of a comodule algebra A over a quasi-Hopf algebra H is considered, which is not necessarily monoidal. However, we define varieties of tensor and Hom endofunctors of this category, state Hom-tensor adjunctions between suitable pairs of these functors, and in each case compute the unit and counit of the adjunction explicitly.

    Material and methods

    First, we consider the category of left B-modules, where B is a left comodule algebra over a quasi-Hopf algebra H, and we note that the left action of the monoidal category of (H,H)-bimodules on this category yields some varieties of tensor and Hom endofunctors of it; we observe that every tensor functor defined in this way has a right adjoint, which is described as a Hom-functor. Next, we extend this idea to the bimodule category.

    Results and discussion

    First, we note that although the bimodule category of a comodule algebra A over a quasi-Hopf algebra H is not monoidal, the coaction of H on A yields an action of the category of (H,H)-bimodules (which is monoidal) on this bimodule category. This action, in turn, allows us to define tensor and Hom-functors as endofunctors of the bimodule category. In each case, we obtain tensor and Hom endofunctors with the bimodule structure defined diagonally using the coaction of H on A and the quasi-antipode (S,α,β) of H. After that, we state Hom-tensor adjunctions between corresponding pairs of Hom and tensor endofunctors. The units and counits of the adjunctions are not trivial as in the Hopf algebra case; they strongly depend on the invariants of the comodule algebra A and the quasi-antipode (S,α,β).

    Conclusion

    The following conclusions were drawn from this research. Let H be a quasi-Hopf algebra with quasi-antipode (S,α,β), let (B,λ) be a left H-comodule algebra, and let V be an (H,H)-bimodule. Then the corresponding tensor and Hom endofunctors form an adjoint pair, with unit and counit determined by elements of H⊗B whose components are given in terms of the quasi-antipode (S,α,β) and the components of the coaction λ. Dually, let H be a quasi-Hopf algebra with quasi-antipode (S,α,β), let (A,ρ) be a right H-comodule algebra, and let V be an (H,H)-bimodule. Then the corresponding pair of endofunctors is adjoint, with unit and counit determined by elements of A⊗H whose components are given in terms of the quasi-antipode (S,α,β) and the components of the coaction ρ.

    Keywords: (quasi-) Hopf algebra, Comodule algebra, Monoidal category, Action of monoidal category
  • Ali Bayati Eshkaftaki* Pages 363-372

    Doubly stochastic matrices play a fundamental role in the theory of majorization. Birkhoff's theorem explains the relation between $n\times n$ doubly stochastic matrices and permutations. In this paper, we first introduce double-null operators and establish some of their important properties. Then, with the help of double-null operators, we investigate Birkhoff's theorem for discrete $l^p$ spaces.
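For orientation, in the finite-dimensional case Birkhoff's theorem is constructive: every doubly stochastic matrix is a convex combination of permutation matrices. A greedy sketch of the Birkhoff-von Neumann decomposition (our own illustration; the assignment solver is one standard way to locate a permutation inside the support):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def birkhoff_decompose(A, tol=1e-12):
    """Greedy Birkhoff-von Neumann decomposition of a doubly stochastic
    matrix: repeatedly find a permutation supported on the positive
    entries and peel off as much of its weight as possible."""
    A = A.astype(float).copy()
    terms = []
    while A.max() > tol:
        # off-support entries get a prohibitive cost, so the assignment
        # stays inside the support (one exists by Birkhoff/Hall)
        cost = np.where(A > tol, -np.log(np.maximum(A, tol)), 1e9)
        rows, cols = linear_sum_assignment(cost)
        w = A[rows, cols].min()          # largest removable weight
        P = np.zeros_like(A)
        P[rows, cols] = 1.0
        terms.append((w, P))
        A -= w * P
    return terms

A = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.5, 0.3],
              [0.3, 0.2, 0.5]])
terms = birkhoff_decompose(A)
```

The paper's double-null operators address exactly what replaces this finite extreme-point picture when matrices are replaced by operators on $l^p(I)$.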

    Keywords: Doubly stochastic operator, Double-null operator, Birkhoff's problem 111, Extreme points
  • Maryam Khademi*, Nima Sheikh Khani, Pooneh Khodabakhsh Pages 373-386
    Introduction

    In recent years, social network analysis has gained a great deal of attention. Social networks have various applications in different areas, such as predicting disease epidemics, search engines and viral advertising. A key property of social networks is that interpersonal relationships can influence the decisions people make. Finding the most influential nodes is important in social networks because such nodes can greatly affect many other people. In this paper, diffusion among nodes is investigated using the centrality of the Shapley value, by dividing a network into communities, in the linear threshold and independent cascade models. Furthermore, the proposed algorithm is evaluated on different data sets and compared with benchmarks.

    Material and methods

    In the proposed algorithm, following the cooperative game formulation, the value of each coalition is the number of nodes inside it or adjacent to at least n nodes inside the coalition. After calculating the Shapley value for each node in the social network, a Shapley value is allocated to each community by summing the Shapley values of the nodes inside it, which is used as a criterion for community evaluation. It is worth mentioning that the social network is divided into non-overlapping communities; in other words, each node belongs to exactly one community. As each community has a unique Shapley value, the fair selection algorithm is utilized to select nodes from communities, and fair selection is then applied to the network. Finally, the selected nodes are evaluated to find the k most influential nodes.
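For the simplest version of such a coverage game (a node counts when it is in the coalition or adjacent to at least one coalition member), the Shapley value of every node has a known closed form (Michalak et al.), so no sampling over coalitions is needed. A small sketch on a hypothetical graph:

```python
def shapley_centrality(adj):
    """Exact Shapley value for the coverage game in which a coalition's
    value is the number of nodes that are in it or adjacent to it.
    Closed form (Michalak et al.): SV(v) = sum of 1/(1 + deg(u))
    over all u in N(v) together with v itself."""
    deg = {v: len(ns) for v, ns in adj.items()}
    return {v: sum(1.0 / (1 + deg[u]) for u in set(ns) | {v})
            for v, ns in adj.items()}

# hypothetical example: a star centred at node 0, plus an isolated edge
adj = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0], 4: [5], 5: [4]}
sv = shapley_centrality(adj)
```

By efficiency of the Shapley value, the scores sum to the value of the grand coalition (here, the number of covered nodes), and the hub of the star receives the largest share, matching the intuition that it is the most influential spreader.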

    Results and discussion

    In this research, we propose a heuristic method called CSCS to tackle the influence maximization problem. This open problem focuses on influencing a larger number of individuals within a social network. The proposed algorithm is based on communities and the centrality of the Shapley value. In order to evaluate its performance, the CSCS algorithm has been applied to six different social networks and the results compared with four benchmarks. The experiments demonstrate that in the linear threshold model, the CSCS algorithm achieves acceptable performance as the number of initial nodes increases. Furthermore, in the independent cascade model, the performance of the CSCS algorithm is often similar to that of the Community Based Degree algorithm, and in some cases it is even better.

    Conclusion



    The execution time of the algorithm is improved efficiently and, consequently, it can easily be applied to large social networks. In the linear threshold model, the CSCS algorithm achieves acceptable performance as the number of initial nodes increases. Furthermore, in the independent cascade model, the performance of the CSCS algorithm is often similar to that of the Community Based Degree algorithm, and in some cases it is even better. The CSCS algorithm also works well on disconnected social networks because it divides the network into communities.

    Keywords: Community, Game theory, Influence maximization, Shapley value, Social network
  • Abbas Khademi, Majid Soleimani-Damaneh* Pages 387-392

    In this paper, we investigate a necessary optimality condition for a specific problem in nonlinear programming, called the sparsity constrained problem. This model involves minimizing a continuously differentiable function over a sparsity constraint. We show that L-stationarity is necessary for optimality in sparsity constrained problems in general; this important property had been proved in the literature under Lipschitz continuity of the gradient mapping.
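The L-stationarity condition studied here can be checked numerically as a fixed-point property of the projected gradient map. A minimal sketch (our own toy quadratic objective; the projection onto the sparsity set keeps the s largest-magnitude entries):

```python
import numpy as np

def project_sparse(y, s):
    """Euclidean projection onto C_s = {x : ||x||_0 <= s}:
    keep the s largest-magnitude entries, zero out the rest."""
    x = np.zeros_like(y)
    keep = np.argsort(np.abs(y))[-s:]
    x[keep] = y[keep]
    return x

def is_L_stationary(x, grad, L, s, tol=1e-10):
    """x is L-stationary when it is a fixed point of the projected
    gradient map x -> P_{C_s}(x - grad(x)/L).  (P_{C_s} is set-valued
    in general; in this example the projection is unique.)"""
    return np.allclose(x, project_sparse(x - grad(x) / L, s), atol=tol)

# toy problem: f(x) = 0.5 ||x - c||^2, so grad f(x) = x - c
c = np.array([3.0, 1.0, 0.5])
grad = lambda x: x - c
x_star = project_sparse(c, 2)      # best 2-sparse point: [3, 1, 0]
x_bad = np.array([3.0, 0.0, 0.5])  # 2-sparse but on the wrong support
```

Here the global minimizer over the 2-sparse set satisfies the fixed-point condition, while a feasible point with the wrong support does not.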

    Keywords: Nonlinear programming, Sparsity constrained problems, L-stationarity, Optimality condition
  • Mehdi Rashidi Kouchi* Pages 393-404
    Introduction

    An orthonormal wavelet is a square-integrable function whose translates and dilates form an orthonormal basis for the Hilbert space L²(ℝ). That is, given the unitary translation operators T_k f(x) = f(x−k) for k ∈ ℤ and the dilation operator Df(x) = √2 f(2x), we call ψ ∈ L²(ℝ) an orthonormal wavelet if the set {Dʲ T_k ψ : j, k ∈ ℤ} is an orthonormal basis for L²(ℝ). This definition was later generalized to higher dimensions and to other dilation and translation sets: given the Hilbert space L²(ℝⁿ) and an n×n expansive matrix A (i.e. a matrix whose eigenvalues all have modulus bigger than 1) with integer entries, the dilation operator is given by D_A f(x) = |det A|^(1/2) f(Ax) and the translation operators by T_k f(x) = f(x−k) for k ∈ ℤⁿ. A finite set of functions in L²(ℝⁿ) is called a multiwavelet if its dilated translates form an orthonormal basis for L²(ℝⁿ). The concept of a multiresolution analysis, abbreviated as MRA, is central to the theory of wavelets. There is much overlap between wavelet analysis and Fourier analysis; indeed, wavelets can be thought of as non-trigonometric Fourier series, and Fourier analysis is used as a tool to investigate properties of wavelets. Another concept is the wavelet set. The term wavelet set was coined by Dai and Larson in the late 90s to describe a set W such that χ_W, the characteristic function of W, is the Fourier transform of an orthonormal wavelet on ℝ. At about the same time as the Dai and Larson paper, Fang and Wang first used the term MSF wavelet (minimally supported frequency wavelet) to describe wavelets whose Fourier transforms are supported on sets of the smallest possible measure. The importance of MSF wavelets as a source of examples and counterexamples has continued throughout wavelet history. A famous example due to Journé first showed that not all wavelets have an associated multiresolution analysis (MRA) structure. The discovery of a non-MRA wavelet gave an important push to the development of more general structures such as frame multiresolution analyses (FMRAs) and generalized multiresolution analyses (GMRAs).
    In this paper, we generalize wavelets and wavelet sets to a locally compact abelian group G with a uniform lattice.
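In the classical case G = ℝ, with the convention that the wavelet satisfies ψ̂ = χ_W (suitably normalized; our statement assumes the standard Fourier normalization), the Dai-Larson description of wavelet sets is a double tiling condition, and it is this property that transfers to the LCA-group setting:

```latex
% W \subset \mathbb{R}, of finite measure, is a wavelet set iff its
% 2\pi-translates and its dyadic dilates each tile the line:
\bigcup_{k \in \mathbb{Z}} (W + 2\pi k) = \mathbb{R}
\qquad\text{and}\qquad
\bigcup_{j \in \mathbb{Z}} 2^{j} W = \mathbb{R},
% with both unions disjoint up to sets of measure zero.
```

On an LCA group with a uniform lattice, translations by the lattice's annihilator and a suitable dilation play the roles of the 2π-shifts and dyadic scalings.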
     

    Material and methods

    In this paper, we investigate wavelet sets on locally compact abelian groups with a uniform lattice, where a uniform lattice H in an LCA group G is a discrete subgroup of G such that the quotient group G/H is compact. We review some basic facts from the theory of LCA groups and harmonic analysis, then define wavelet sets on these groups and characterize them using the Fourier transform and multiresolution analysis.

    Results and discussion

    We extend the theory of wavelet sets to locally compact abelian groups with a uniform lattice. This is a generalization of wavelet sets on Euclidean space. We characterize wavelet sets using the Fourier transform and multiresolution analysis. Also, we define generalized scaling sets and dimension functions on locally compact abelian groups and verify their relations with wavelet sets. Dimension functions for MSF wavelets are described by generalized scaling sets. In the setting of LCA groups, we define translation congruence and show that wavelet sets are translation congruent, so we can define a map on G that is measurable, measure-preserving and bijective.

    Conclusion

    The following conclusions were drawn from this research. Wavelet sets on locally compact abelian groups with a uniform lattice can be defined; this is a generalization of wavelet sets on Euclidean space. The characterization of wavelet sets on LCA groups can be done in different ways: one method is to use the Fourier transform and translation congruence; another is to generalize scaling sets and the dimension function. As an example, the Cantor dyadic group is a non-trivial example that fits the theory of wavelet sets on locally compact groups with a uniform lattice; we find a wavelet set and a generalized scaling set for this group and show that the related wavelet is an MRA wavelet.

    Keywords: wavelet, locally compact abelian group, wavelet set, multiresolution analysis, generalized scaling sets
  • Maryam Sharafi*, Farideh Tavangar, Shohre Enami, Hossein Nadeb Pages 405-416
    Introduction

    The kappa distribution was first introduced by Mielke (1973) and Mielke and Johnson (1973) for describing and analyzing precipitation data. This distribution is positively skewed and is widely applied when studying precipitation, wind speed and stream flow data in hydrology. The kappa distribution has some advantages over the gamma and log-normal distributions in fitting historical rainfall data, because, unlike the latter two distributions, it has closed forms for both the cumulative distribution function and the quantile function. Due to this important feature, the kappa distribution has attracted the attention of several researchers. Park et al. (2009) introduced the three-parameter kappa distribution, provided a description of its mathematical properties, and estimated the parameters by three methods; they also illustrated its applicability on rainfall data from Seoul, Korea. In this paper, we study the distribution and the estimation methods for the parameters considered by Park et al. (2009), and propose a new estimation method. We then compare these estimation methods using a Monte Carlo simulation study and a real dataset.

    Material and methods

    In this scheme, we first consider the three-parameter kappa distribution and study some of its properties, and then estimate the parameters of the distribution by four methods: the method of moments (MM), L-moments (LM), maximum likelihood (ML) and the maximum product of spacings method (MPS). Using a Monte Carlo simulation study and a real data set, the performance of these methods is compared.
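    As an illustration of the MPS principle only (not the paper's kappa-specific implementation — the exponential distribution is used below purely because its CDF is simple, and all function names are mine):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def mps_exponential(data):
    """Maximum product of spacings (MPS) estimate of an exponential rate.

    MPS maximizes sum_i log( F(x_(i)) - F(x_(i-1)) ) over the sorted
    sample, with F(x_(0)) = 0 and F(x_(n+1)) = 1.  Here F is the
    exponential CDF, standing in for the kappa CDF of the paper.
    """
    x = np.sort(np.asarray(data))

    def neg_log_spacings(rate):
        cdf = 1.0 - np.exp(-rate * x)                 # CDF at order statistics
        grid = np.concatenate(([0.0], cdf, [1.0]))
        spacings = np.clip(np.diff(grid), 1e-300, None)  # guard log(0)
        return -np.sum(np.log(spacings))

    res = minimize_scalar(neg_log_spacings, bounds=(1e-6, 100.0),
                          method="bounded")
    return res.x

rng = np.random.default_rng(0)
sample = rng.exponential(scale=0.5, size=2000)        # true rate = 2.0
rate_hat = mps_exponential(sample)
```

For large samples the MPS estimate is asymptotically equivalent to the MLE, but it remains well defined in cases (e.g. unbounded likelihoods) where the MLE breaks down, which is one reason it is attractive for heavy-tailed hydrological models.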

    Results and discussion

    Comparing the performance of the proposed estimation methods in terms of bias and root mean square error (RMSE), it can be concluded that the MPS method performs best owing to its lower bias and RMSE. The Kolmogorov-Smirnov test is applied as a goodness-of-fit test of the three-parameter kappa distribution to the monthly rainfall data of the Abali station in Tehran province. The results demonstrate that the MPS method leads to better results than the other methods mentioned.
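    A goodness-of-fit check of this kind can be scripted with SciPy's Kolmogorov-Smirnov test. The snippet below uses synthetic gamma data as a stand-in for the Abali rainfall series (which is not reproduced in this abstract) and a gamma fit in place of the kappa fit:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Synthetic positive "rainfall-like" sample (illustrative stand-in).
rainfall = rng.gamma(shape=2.0, scale=30.0, size=300)

# Fit a candidate distribution, then test the fit with Kolmogorov-Smirnov.
shape, loc, scale = stats.gamma.fit(rainfall, floc=0.0)
ks_stat, p_value = stats.kstest(rainfall, "gamma", args=(shape, loc, scale))
```

One caveat: when the parameters are estimated from the same data being tested, the standard KS p-value is only indicative (it is conservative); the comparison between estimation methods in the paper should be read with that in mind.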

    Conclusion

    The following conclusions were drawn from this research. The Monte Carlo simulation shows that the maximum product of spacings method proposed in this paper is the best method for estimating the parameters of the three-parameter kappa distribution. The statistic and p-value of the Kolmogorov-Smirnov test show that the three-parameter kappa distribution fitted by the MPS method fits better than with the other methods.

    Keywords: Three-parameter kappa distribution, Maximum likelihood estimator, L-Moments estimator, Maximum product of spacings estimator
  • Davood Farbod* Pages 417-428

    In this paper, we consider a three-parameter regularly varying generalized hypergeometric distribution generated by a birth-death process for describing phenomena in bioinformatics (Danielian and Astola, 2006). Under some conditions, we obtain the system of likelihood equations whose solution coincides with the maximum likelihood estimators; these maximum likelihood estimators coincide with certain moment estimators. Moreover, an approximate computation of the maximum likelihood estimates of the unknown parameters is given. Simulation studies using MCMC are proposed. Finally, to present applications, some real data sets in bioinformatics are fitted with the model. Based on some important criteria, the model is compared with four other discrete distributions used in bioinformatics, and we see that the generalized hypergeometric distribution provides a better fit than all four.

    Keywords: Generalized Hypergeometric Distribution, Birth-Death Process, Bioinformatics, Maximum likelihood estimation (MLE), Markov Chain Monte Carlo (MCMC)
  • Hojjat Farzadfard* Pages 429-440
    Introduction

    This paper concerns an application of the modulus of continuity to characterizing geodesics. The modulus of continuity of a continuous function between metric spaces is a two-variable function assigning to each point and each positive epsilon the greatest positive delta that satisfies the definition of continuity at that point. It is shown that if f is a function from the real numbers into a metric space, then linearity of its modulus of continuity implies that f is locally geodesic, that is, it locally preserves the metric. Geodesics appear in several branches of mathematics as well as in gravitational physics and general relativity. In mathematics, topics such as Riemannian geometry make extensive use of geodesics. Geodesics represent shortest paths; for example, a mass moving in vacuum under gravity travels along a geodesic.
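    In symbols, the verbal definition above reads (notation is mine, not necessarily the paper's):

```latex
\omega_f(x,\varepsilon)
\;=\;
\sup\bigl\{\,\delta > 0 \;:\; d\bigl(f(y),f(x)\bigr) \le \varepsilon
\ \text{whenever}\ |y - x| \le \delta \,\bigr\}.
```

    Linearity then means $\omega_f(x,\varepsilon) = c\,\varepsilon$ for a constant $c > 0$ independent of $x$ and $\varepsilon$, and the paper's result is that this forces $f$ to preserve the metric locally up to the factor $1/c$.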

    Material and Methods

    In Section 2 we first deal with some essential properties of the modulus of continuity. Then we mainly focus on continuous functions from the real numbers to metric spaces whose modulus of continuity is linear. In two theorems we characterize such functions and show that they are in fact locally geodesic.

    Results and Discussion

    The main result demonstrates that a continuous function with linear modulus of continuity falls into one of three classes: functions that preserve the metric up to a coefficient; functions that preserve the metric on two rays whose intersection is a singleton; and functions that preserve the metric on two rays whose intersection is a closed interval of positive length. We then turn to locally convex metric spaces and show that the third class does not occur in such spaces. Finally, we apply our results to the case where both the domain and range of the function are the real line.

    Conclusion

    The following conclusions have been drawn in this paper: essential properties of the modulus of continuity are established; continuous functions of the real line into metric spaces whose modulus of continuity is linear are characterized; and continuous functions of the real line into locally convex metric spaces whose modulus of continuity is linear are treated.

    Keywords: Geodesic, Modulus of continuity, The length of a path, Lipschitz map
  • Rahim Kargar, Ali Ebadian, Nader Kanzi* Pages 441-448
    Introduction

    Let  be the open unit disc in the complex plane  and  be the class of all functions  which are analytic and normalized in . The subclass of  consisting of all univalent functions  in  is denoted by . A function  is said to be starlike if and only if  for all . We denote by  the class of all starlike functions in . If  and  are two functions in , then we say that  is subordinate to , written  or , if there exists a Schwarz function  such that  for all . Furthermore, if the function  is univalent in , then we have the following equivalence: . Also, for  and , their Hadamard product (or convolution) is defined by . The logarithmic coefficients of , denoted by , are defined by . These coefficients play an important role in various estimates in the theory of univalent functions. For example, consider the Koebe function , where . It is easy to see that this function has logarithmic coefficients , where  and . Also, for the function  we have , and the sharp estimates  and  hold; here the Fekete-Szego theorem is used. For , the problem seems much harder, and no significant upper bounds for  with  appear to be known. Moreover, the problem of finding the sharp upper bound of  for  is still open for . Sharp upper bounds for the modulus of the logarithmic coefficients are known for functions in only a few subclasses of . For functions in the class , it is easy to prove that  for , with equality for the Koebe function. The celebrated de Branges inequalities (the former Milin conjecture) for univalent functions  state that , where , with equality if and only if . De Branges used this inequality to prove the celebrated Bieberbach conjecture. Moreover, the de Branges inequalities have been the source of many other interesting inequalities involving the logarithmic coefficients of , such as . Let  denote the class of functions  satisfying the following subordination relation: , where .
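    Several inline formulas were lost from this abstract in extraction. For the reader's convenience, the standard definitions presumably intended for the logarithmic coefficients, the Koebe function and the de Branges inequalities are (stated from the classical literature, not verbatim from the paper):

```latex
% Logarithmic coefficients of a normalized univalent f:
\log\frac{f(z)}{z} \;=\; 2\sum_{n=1}^{\infty}\gamma_n z^{n}, \qquad z\in\mathbb{D}.
% For the Koebe function:
k(z) \;=\; \frac{z}{(1-z)^{2}} \;=\; \sum_{n=1}^{\infty} n z^{n},
\qquad
\log\frac{k(z)}{z} \;=\; 2\sum_{n=1}^{\infty}\frac{z^{n}}{n},
\quad\text{so}\quad \gamma_n = \frac{1}{n}.
% De Branges (Milin) inequalities, for f in the class S:
\sum_{m=1}^{n}(n+1-m)\Bigl(m\,|\gamma_m|^{2}-\frac{1}{m}\Bigr)\;\le\;0,
\qquad n\ge 1,
% with equality exactly for rotations of the Koebe function.
```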

    Material and methods

    In this paper, we first obtain a subordination relation for the class , and by making use of this relation we give two sharp estimates for the logarithmic coefficients of the function .

    Results and discussion

    We obtain two sharp estimates for the logarithmic coefficients of the function .

    Conclusion

    The following conclusions were drawn from this research: the logarithmic coefficients  of the function  are estimated.

    Keywords: Univalent functions, Starlikeness, Subordination, Logarithmic coefficients, Hadamard product
  • Reza Mokhtari*, Elham Feizollahi Pages 449-464
    Introduction

    Following and generalizing the excellent work of Wang et al. [26], we derive some new schemes, based on semi-Lagrangian discretization, the modified equation theory, and the local one-dimensional (LOD) approach, for computing solutions of a system of two-dimensional (2D) Burgers' equations. A careful error analysis is carried out to demonstrate the accuracy of the proposed semi-Lagrangian finite difference methods. By numerical simulation of the nonlinear system of 2D Burgers' equations (3.1), we show the high accuracy and unconditional stability of the five-point implicit scheme (3.32-3.33). The results of [26] and of this paper confirm that the classical modified equation technique extends readily to various 1D and 2D nonlinear problems; furthermore, a new viewpoint is opened for developing efficient semi-Lagrangian methods. Without suitable interpolants for generating the solution values at the departure points, the method cannot be applied. Rather than examining the effect of various interpolation methods, we focus on constructing explicit and implicit schemes; among the interpolants found in the literature [6], [21], we exploit the simplest and most applicable ones, i.e., B-spline and Lagrange interpolants. Using the modified equation approach, several semi-Lagrangian schemes are developed: a six-point explicit method (which suffers from a restrictive stability condition), a six-point implicit method (which is unconditionally stable but has low-order truncation error), and a five-point implicit method (3.32-3.33) that is unconditionally stable with high-order truncation error. In each step of this scheme we must solve two tridiagonal linear systems, so its computational complexity is low.
    Furthermore, it can be implemented in parallel. As mentioned in [26], this algorithm extends naturally to efficient and accurate semi-Lagrangian schemes for many other nonlinear time-dependent problems in which advection plays an important role, such as the KdV and Navier-Stokes equations. In [9] we tried to apply this approach to the KdV equation, but constructing an implicit method that is unconditionally stable with high-order truncation error requires considerable symbolic computation to extract the coefficients of the scheme.

    Material and methods

    To construct the five-point implicit scheme (3.32-3.33), we exploit a Lagrange or B-spline interpolation method, the modified equation approach, and the local one-dimensional technique. The five-point implicit scheme is unconditionally stable, has a satisfactory order of convergence, and its computational cost is low.
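    The paper's scheme is a 2D five-point method; as a minimal illustration of the semi-Lagrangian ingredient only (trace each grid point back along its characteristic, then interpolate the old solution at the departure point), here is a 1D constant-coefficient advection sketch, with all names mine:

```python
import numpy as np

def semi_lagrangian_step(u, speed, dx, dt):
    """One semi-Lagrangian step for u_t + speed*u_x = 0 on a periodic grid.

    Each grid point is traced back along its characteristic to the
    departure point x - speed*dt, and the old solution is interpolated
    there (linear interpolation stands in for the B-spline/Lagrange
    interpolants of the paper).  No CFL restriction applies.
    """
    n = len(u)
    length = n * dx
    x = np.arange(n) * dx
    x_dep = (x - speed * dt) % length          # periodic departure points
    xp = np.append(x, length)                  # wrap the last cell
    up = np.append(u, u[0])
    return np.interp(x_dep, xp, up)

# Advect a Gaussian bump once around a periodic domain with CFL number 5,
# far beyond the stability limit of an explicit Eulerian scheme.
n, L, speed = 200, 1.0, 1.0
dx = L / n
x = np.arange(n) * dx
u0 = np.exp(-200.0 * (x - 0.5) ** 2)
u = u0.copy()
dt = 5 * dx                                    # CFL = speed*dt/dx = 5
for _ in range(round(L / (speed * dt))):       # one full period
    u = semi_lagrangian_step(u, speed, dx, dt)
```

After one full period the bump returns to its starting position despite the large time step, which is the unconditional-stability property the five-point implicit scheme shares.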

    Results and discussion

    Using the modified equation approach, several semi-Lagrangian schemes for solving a system of 2D Burgers' equations are developed here: a six-point explicit method, which is conditionally stable and has low-order truncation error; a six-point implicit method, which is unconditionally stable but whose truncation error is not of high order; and a five-point implicit method, which is unconditionally stable with high-order truncation error and reasonable computational complexity.

    Conclusion

    We encapsulate the findings and conclusions of this research as follows: our proposed scheme is a local one-dimensional scheme obtained on the basis of the modified equation approach; our semi-Lagrangian finite difference scheme is not limited by the Courant-Friedrichs-Lewy (CFL) condition, so larger step sizes can be applied in the time variable; and the proposed five-point implicit method is a high-order, unconditionally stable method with reasonable computational cost.

    Keywords: System of 2D Burgers' equations, Semi-Lagrangian finite difference scheme, Modified equation approach, Local one-dimensional approach
  • Navideh Modarresi, Saeid Rezakhah*, Shirin Shoaee Pages 465-476
    Introduction

    A flexible and tractable class of linear models is the autoregressive moving average (ARMA) processes, which are driven by discrete-time noise. Continuous-time ARMA (CARMA) processes have wide application in data modeling where they are more appropriate than discrete-time models [1], specifically for high-frequency, irregularly spaced data or data with missing observations. Many such data show a periodic structure in their squared log intraday returns [2]. In financial markets, variations and jumps play a critical role in asset pricing and volatility models. Levy-driven versions of these processes were studied in [3]. The driving Levy process has two main components: the continuous-variation part and the pure-jump component [4]. A Levy-driven CARMA process is described as the unique solution of a stochastic differential equation [5]. It is known that this family of CARMA processes is stationary or asymptotically stationary. Levy processes have stationary increments, whereas semi-Levy processes have periodically stationary increments and are more realistic in many cases. In this article, we study semi-Levy-driven CARMA processes, in the case where the driving process is a semi-Levy compound Poisson process.

    Semi-Levy CARMA Process:

    Presenting the structure of semi-Levy processes and their characterization, we show that the semi-Levy-driven CARMA process has periodic mean and covariance functions. To show this, we introduce a discretization of the process into successive period intervals, where the  th period interval is , with  the period. We then consider a predefined partition of each period interval into  subintervals of possibly different lengths, the same for all period intervals. The jump process, say a Poisson process, is assumed to have a fixed intensity parameter on each subinterval, say on the  th subinterval of each period interval, and so has the periodic property . The semi-Levy compound Poisson process is then defined by , where  is the semi-Levy Poisson process,  is some positive constant and the jumps of size  are iid random variables. The state representation of the process is , where the state equation is . We present the theoretical results and prove the periodically correlated structure of the process. We also investigate the periodically correlated behavior of data simulated from the model. Simulating the underlying measure and discretizing with 12 equally spaced samples in each period interval, we arrange the samples into a corresponding 12-dimensional process to check stationarity. We then present the correlogram plot and box plot of the corresponding multi-dimensional stationary processes, along with the corresponding cross-correlograms. The stationarity of these multivariate processes illustrates how this class of CARMA processes is periodically correlated.
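    The driving noise described above — a compound Poisson process whose intensity is piecewise constant on the subintervals of each period and repeats from period to period — can be sketched as follows (a minimal illustration; all names and parameter values are mine, not the paper's notation):

```python
import numpy as np

def simulate_semi_levy_cp(rates, sub_len, n_periods, jump_sampler, rng):
    """Simulate a compound Poisson process with periodic intensity.

    The period is split into subintervals; rates[j] is the constant
    Poisson intensity on subinterval j of every period and sub_len[j]
    its length -- this piecewise-constant periodic intensity is the
    "semi-Levy" ingredient.  Jump sizes are iid draws from jump_sampler.
    Returns the sorted jump times and the cumulative process values.
    """
    period = sum(sub_len)
    times = []
    for p in range(n_periods):
        t0 = p * period
        offset = 0.0
        for rate, length in zip(rates, sub_len):
            k = rng.poisson(rate * length)      # jump count in subinterval
            times.extend(t0 + offset + rng.uniform(0.0, length, size=k))
            offset += length
    times = np.sort(np.array(times))
    jumps = jump_sampler(len(times))
    return times, np.cumsum(jumps)

rng = np.random.default_rng(1)
rates = [2.0, 8.0, 2.0]          # higher jump intensity mid-period
sub_len = [1.0, 1.0, 1.0]        # period length = 3
times, values = simulate_semi_levy_cp(
    rates, sub_len, n_periods=500,
    jump_sampler=lambda k: rng.normal(0.0, 1.0, size=k), rng=rng)
mean_jumps_per_period = len(times) / 500
```

With these illustrative rates the expected number of jumps per period is 2+8+2 = 12, and the clustering of jumps in the middle subinterval is what induces the periodically correlated structure studied in the paper.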

    Conclusion

    The following conclusions were drawn from this research. The theoretical structure and state space representation of a CARMA process driven by a semi-Levy compound Poisson process are obtained. The statistical properties and characteristics of the process are presented, and it is shown that the process has a periodically correlated structure. By simulating data and plotting the correlograms and box plots of the corresponding multi-dimensional process for the equally spaced discretization sample, the periodic behavior of the process is verified.

    Keywords: Semi-Levy processes, CARMA models, Periodic behavior, Correlograms, Simulation analysis
  • Ahmad Molabahrami* Pages 477-486
    Introduction

    For the scientific study of a natural phenomenon, it must be modeled. The resulting model is often expressed as a differential equation (DE), an integral equation (IE), an integro-differential equation (IDE), or a system of these; integral equations and their solutions therefore play a major role in science and engineering. There are several numerical and analytical methods for solving the above models. Analytical methods include homotopy-based and other methods that give the answer as a sequence or series. The two most important classes of numerical methods for integral equations are projection methods, including the Galerkin and collocation methods, and Nystrom methods. The degenerate kernel method (DKM) is a well-known classical method for solving Fredholm integral equations of the second kind, and one of the easiest numerical methods to define and analyze. For a kernel that is already degenerate, this method is called the direct computation method (DCM). In this paper, a modified degenerate kernel method (MDKM) is used to investigate multi-dimensional Fredholm integral equations of the second kind. To construct this modification, the source function is approximated by the same method employed to obtain a degenerate approximation of the kernel. Nonlinear integral equations often pose challenges, the most important of which is finding all solutions, or appropriate approximations of them. In this study, we demonstrate that for linear and nonlinear equations for which it is "possible" to find the exact solution or solutions, the proposed method does so reliably; "possibility" here means that the solution of the equation under consideration belongs to the subspace generated by the basis. The proposed method is also able to provide appropriate approximations of solutions for which this "possibility" does not hold.

    Material and methods

    In the MDKM, interpolation is used to build a degenerate approximation of a non-degenerate kernel as well as of the source function; Lagrange polynomials are adopted for the interpolation. The method transforms an integral equation of the second kind into a system of algebraic equations. The error and convergence of the algorithm are analyzed rigorously.
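    The core reduction behind any degenerate kernel method — a degenerate kernel turns the integral equation into a small linear system — can be sketched as follows. This is the classical one-dimensional DKM/DCM, not the paper's multi-dimensional MDKM (which additionally interpolates the source function); all names are mine:

```python
import numpy as np

def trapz(y, x):
    """Composite trapezoidal rule (avoids numpy version differences)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def solve_degenerate_fredholm(a_funcs, b_funcs, f, lam, grid):
    """Solve u(x) = f(x) + lam * int K(x,t) u(t) dt for a degenerate
    kernel K(x,t) = sum_i a_i(x) * b_i(t) on the interval covered by grid.

    Substituting u = f + lam * sum_i c_i * a_i reduces the equation to
    the m x m linear system (I - lam*A) c = g with
        A[i][j] = int b_i(t) a_j(t) dt,   g[i] = int b_i(t) f(t) dt,
    approximated here with the trapezoidal rule.
    """
    m = len(a_funcs)
    A = np.empty((m, m))
    g = np.empty(m)
    t = grid
    for i, b in enumerate(b_funcs):
        bt = b(t)
        g[i] = trapz(bt * f(t), t)
        for j, a in enumerate(a_funcs):
            A[i, j] = trapz(bt * a(t), t)
    c = np.linalg.solve(np.eye(m) - lam * A, g)
    return lambda x: f(x) + lam * sum(ci * a(x) for ci, a in zip(c, a_funcs))

# Example with a known answer: u(x) = x + int_0^1 x*t*u(t) dt
# has the exact solution u(x) = 1.5*x.
grid = np.linspace(0.0, 1.0, 2001)
u = solve_degenerate_fredholm([lambda x: x], [lambda t: t],
                              lambda x: x, lam=1.0, grid=grid)
```

In the example the solution lies in the span of the kernel's x-factors (the "possibility" condition of the abstract), so the method recovers it essentially exactly.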

    Results and discussion

    The efficiency of the approach is shown by applying the procedure to some prototype examples and comparing it with other methods. The reported results demonstrate that the present method can obtain all the exact solutions for equations for which this has not previously been reported with other methods. The CPU times reported also indicate that the computational cost of our method is very reasonable.

    Conclusion

    The following conclusions were drawn from this research. It is possible to obtain all the exact solutions of an integral equation with a non-degenerate kernel. The presented method can give the closed form of the exact solution(s) of an integral equation. The presented method shows that the nonlinearity of an integral equation cannot change the form of the exact solution(s).

    Keywords: Multi-dimensional Fredholm integral equations of the second kind, Multi-dimensional Lagrange interpolation, Degenerate kernel method, Direct computation method, Modified degenerate kernel method
  • Tayebe Waezizadeh*, Tayebe Parsaei, Fereshte Fourozesh Pages 487-500
    Introduction

    One of the major challenges in supporting a growing human population is the supply of food. Plants play a major role in providing human food, so it is important to study plant diseases and provide appropriate models describing the relationship between plant infection and growth and reproduction. One effective way to describe this relationship is a mathematical model; an important aspect such a model can capture is the dynamics of the plant's immune system. In this paper, a mathematical model for the spread of infection in a host plant is introduced. The model is based on a system of differential equations with two time delays. The host population of cells is divided into classes: susceptible cells , which are mature and susceptible to infection; infected cells , which spread the infection; recovered cells , which are no longer infectious; and proliferating cells , which become susceptible after reaching maturity. We consider two time delays,  and , in the equations: proliferating cells have average maturation time , after which they are recruited to the susceptible class, and  is the average time of antiviral effects. In the following sections, stability conditions for the equilibrium points are investigated. In the last section, we consider a plant in two different modes, organic and non-organic, plot the solution curves for different time delays, and compare the solutions.

    Material and methods

    In this scheme, we first explain the conditions of the plant. Then, a mathematical model with two time delays is introduced and the dynamical behavior of the model is investigated. At the end of the paper, we consider a plant in two different modes and plot the solution curves.
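    Integrating a delay differential equation numerically requires a history buffer for the delayed state. As a minimal stand-in for the paper's four-compartment two-delay model (which is not reproduced in this abstract), here is Hutchinson's delayed logistic equation with a single delay, integrated by the Euler method; all names and parameter values are mine:

```python
import numpy as np

def delayed_logistic(r, K, tau, x0, t_end, dt):
    """Euler integration of x'(t) = r*x(t)*(1 - x(t - tau)/K).

    A history buffer supplies the delayed state x(t - tau); the constant
    history x(t) = x0 is assumed for t <= 0.  For small r*tau the
    equilibrium x = K is stable and the solution settles there, mirroring
    the kind of delay-dependent stability analyzed in the paper.
    """
    n = int(round(t_end / dt))
    lag = int(round(tau / dt))            # delay expressed in time steps
    x = np.empty(n + 1)
    x[0] = x0
    for k in range(n):
        x_delayed = x[k - lag] if k >= lag else x0
        x[k + 1] = x[k] + dt * r * x[k] * (1.0 - x_delayed / K)
    return x

# r*tau = 0.25 is well inside the stability region, so x(t) -> K = 1.
x = delayed_logistic(r=0.5, K=1.0, tau=0.5, x0=0.1, t_end=60.0, dt=0.01)
```

Increasing r*tau past a threshold destabilizes the equilibrium through a Hopf bifurcation and produces sustained oscillations, which is the same qualitative mechanism flagged in the paper's keywords.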

    Results and discussion

    We introduce a mathematical model that describes the conditions of plant cells. In this model the independent variable is time, so the model is a system of ODEs with two time delays. Using theorems from dynamical systems, the dynamical behavior of the model is investigated; these results identify conditions on a plant under which an epidemic does not occur. Finally, we use MATLAB to plot the solution curves under two different conditions. The curves describe the behavior of plant cells when they are infected.

    Conclusion

    The following conclusions were drawn from this research. The mathematical model introduced in this paper is more realistic than previous models because the growth rate of the plant is taken to be logistic. The theorems show how the virus can be controlled to prevent an epidemic outbreak. We plot solution curves for two different plants (organic and non-organic); the solution curves show how the conditions of the plant cells change as the parameters change.

    Keywords: Mathematical model, Equilibrium point, Stability, Hopf bifurcation